59 research outputs found

    Multimodal Grounding for Language Processing

    This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs, which play a crucial role for the compositional power of language. Comment: The paper has been published in the Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018). Please refer to that version for citations: https://www.aclweb.org/anthology/papers/C/C18/C18-1197
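
    To make the methodological inventory concrete, here is a minimal Python sketch of two common strategies for combining multimodal representations: early fusion by concatenation and a learned gated combination. All dimensions, weights, and variable names are illustrative stand-ins, not taken from the survey.

    # A minimal sketch of two fusion strategies, assuming random stand-in
    # features; dimensions are illustrative, not from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    text_vec = rng.standard_normal(300)    # e.g. a word embedding
    image_vec = rng.standard_normal(2048)  # e.g. a CNN image feature

    # Early fusion: concatenate the modalities into one joint representation.
    early = np.concatenate([text_vec, image_vec])

    # Gated fusion: project both modalities to a shared space, then let a
    # sigmoid gate decide how much visual information to mix in.
    W_t = rng.standard_normal((512, 300)) * 0.01
    W_i = rng.standard_normal((512, 2048)) * 0.01
    t, i = W_t @ text_vec, W_i @ image_vec
    gate = 1.0 / (1.0 + np.exp(-(t + i)))  # element-wise sigmoid gate
    fused = gate * i + (1.0 - gate) * t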

    Analyzing Cognitive Plausibility of Subword Tokenization

    Subword tokenization has become the de facto standard for tokenization, although comparative evaluations of subword vocabulary quality across languages are scarce. Existing evaluation studies focus on the effect of a tokenization algorithm on performance in downstream tasks, or on engineering criteria such as the compression rate. We present a new evaluation paradigm that focuses on the cognitive plausibility of subword tokenization. We analyze the correlation of the tokenizer output with the response time and accuracy of human performance on a lexical decision task. We compare three tokenization algorithms across several languages and vocabulary sizes. Our results indicate that the UnigramLM algorithm yields less cognitively plausible tokenization behavior and worse coverage of derivational morphemes, in contrast with prior work.
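
    The evaluation paradigm can be illustrated with a short sketch: count how many subwords a tokenizer assigns to each stimulus word and correlate that with human response times from a lexical decision task. The toy tokenizer and the data below are hypothetical; in the study, trained subword tokenizers such as BPE, WordPiece, or UnigramLM would take its place.

    # A minimal sketch of the correlation analysis; words, response times,
    # and the tokenizer stub are hypothetical illustrations.
    from scipy.stats import spearmanr

    # Stand-in for a trained subword tokenizer (e.g. BPE, UnigramLM):
    # naively splits a word into chunks of three characters.
    toy_tokenize = lambda w: [w[i:i + 3] for i in range(0, len(w), 3)]

    words = ["cat", "running", "unbelievable", "dog", "nationality"]
    rt_ms = [512, 598, 701, 505, 688]  # hypothetical mean response times

    splits = [len(toy_tokenize(w)) for w in words]  # subwords per word
    rho, p = spearmanr(splits, rt_ms)
    print(f"Spearman rho={rho:.2f} (p={p:.3f})")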

    Cross-Lingual Transfer of Cognitive Processing Complexity

    When humans read a text, their eye movements are influenced by the structural complexity of the input sentences. This cognitive phenomenon holds across languages, and recent studies indicate that multilingual language models utilize structural similarities between languages to facilitate cross-lingual transfer. We use sentence-level eye-tracking patterns as a cognitive indicator for structural complexity and show that the multilingual model XLM-RoBERTa can successfully predict varied patterns for 13 typologically diverse languages, despite being fine-tuned only on English data. We quantify the sensitivity of the model to structural complexity and distinguish a range of complexity characteristics. Our results indicate that the model develops a meaningful bias towards sentence length but also integrates cross-lingual differences. We conduct a control experiment with randomized word order and find that the model seems to additionally capture more complex structural information.
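
    A minimal sketch of the underlying setup, assuming a simple mean-pooled regression head on XLM-RoBERTa; the study's exact head and gaze features may differ.

    # A regression head on XLM-RoBERTa that maps a sentence to a gaze
    # feature (e.g. total fixation duration); names are assumptions.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
    encoder = AutoModel.from_pretrained("xlm-roberta-base")
    head = torch.nn.Linear(encoder.config.hidden_size, 1)  # one gaze feature

    def predict_gaze(sentence: str) -> torch.Tensor:
        batch = tokenizer(sentence, return_tensors="pt")
        hidden = encoder(**batch).last_hidden_state  # (1, seq_len, dim)
        pooled = hidden.mean(dim=1)                  # mean-pool the tokens
        return head(pooled)                          # predicted feature value

    # Fine-tuned on English gaze data only, then applied zero-shot to
    # the other typologically diverse languages in the study.
    print(predict_gaze("The horse raced past the barn fell."))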

    Probing Multilingual BERT for Genetic and Typological Signals

    We probe the layers in multilingual BERT (mBERT) for phylogenetic and geographic language signals across 100 languages and compute language distances based on the mBERT representations. We 1) employ the language distances to infer and evaluate language trees, finding that they are close to the reference family tree in terms of quartet tree distance, 2) perform distance matrix regression analysis, finding that the language distances can be best explained by phylogenetic and worst by structural factors, and 3) present a novel measure of diachronic meaning stability (based on cross-lingual representation variability) which correlates significantly with published ranked lists based on linguistic approaches. Our results contribute to the nascent field of typological interpretability of cross-lingual text representations. Comment: COLING 2020
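
    A minimal sketch of the first step of the pipeline, with random stand-ins for the per-language mBERT vectors and simple hierarchical clustering in place of the paper's exact tree inference method.

    # Derive one vector per language, compute pairwise distances, and
    # infer a tree; vectors and the clustering choice are illustrative.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, dendrogram

    rng = np.random.default_rng(42)
    languages = ["en", "de", "nl", "fr", "es", "hi"]
    # Stand-in for mean-pooled mBERT representations per language.
    lang_vecs = rng.standard_normal((len(languages), 768))

    dists = pdist(lang_vecs, metric="cosine")   # condensed distance matrix
    tree = linkage(dists, method="average")     # UPGMA-style language tree
    print(dendrogram(tree, labels=languages, no_plot=True)["ivl"])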

    Dynamic Top-k Estimation Consolidates Disagreement between Feature Attribution Methods

    Feature attribution scores are used for explaining the prediction of a text classifier to users by highlighting the k most relevant tokens. In this work, we propose a way to determine the optimal number of tokens k to display, based on sequential properties of the attribution scores. Our approach is dynamic across sentences, method-agnostic, and deals with sentence length bias. We compare agreement between multiple methods and humans on an NLI task, using both a fixed k and a dynamic k. We find that perturbation-based methods and Vanilla Gradient exhibit the highest agreement on most method-method and method-human agreement metrics with a static k. Their advantage over other methods disappears with dynamic k, which mainly improves Integrated Gradient and GradientXInput. To our knowledge, this is the first evidence that sequential properties of attribution scores are informative for consolidating attribution signals for human interpretation.
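
    The general idea can be sketched as follows. The concrete rule below, cutting at the largest gap between consecutive sorted scores, is an illustrative heuristic and not necessarily the paper's estimator.

    # Derive k per sentence from the attribution scores themselves,
    # instead of using one fixed k for all sentences.
    import numpy as np

    def dynamic_k(scores):
        s = np.sort(np.asarray(scores))[::-1]   # scores, descending
        gaps = s[:-1] - s[1:]                   # drops between neighbors
        return int(np.argmax(gaps)) + 1         # cut before largest drop

    attributions = [0.41, 0.39, 0.08, 0.06, 0.04, 0.02]  # hypothetical
    k = dynamic_k(attributions)
    print(f"highlight the top {k} tokens")      # -> top 2 for this sentence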

    Predicting and Manipulating the Difficulty of Text-Completion Exercises for Language Learning

    The increasing level of international communication in all aspects of life leads to a growing demand for language skills. Traditional language courses nowadays compete with a wide range of online offerings that promise higher flexibility. However, most platforms provide rather static educational content and do not yet incorporate recent progress in educational natural language processing. In recent years, many researchers have developed new methods for automatic exercise generation, but the generated output is often either too easy or too difficult to be used with real learners. In this thesis, we address the task of predicting and manipulating the difficulty of text-completion exercises based on measurable linguistic properties, in order to bridge the gap between technical ambition and educational needs.

    The main contribution consists of a theoretical model and a computational implementation for exercise difficulty prediction on the item level. This is the first automatic approach that reaches human performance levels and is applicable to various languages and exercise types. The exercises in this thesis differ with respect to exercise content and exercise format. As the theoretical basis for the thesis, we develop a new difficulty model that combines content and format factors and further distinguishes the dimensions of text difficulty, word difficulty, candidate ambiguity, and item dependency. It is targeted at text-completion exercises, which are a common method for fast language proficiency tests.

    The empirical basis for the thesis consists of five difficulty datasets containing exercises annotated with learner performance data. The difficulty is expressed as the ratio of learners who fail to solve the exercise. In order to predict the difficulty of unseen exercises, we implement the four dimensions of the model as computational measures. For each dimension, the thesis contains the discussion and implementation of existing measures, the development of new approaches, and an experimental evaluation on sub-tasks. In particular, we develop new approaches for the tasks of cognate production, spelling difficulty prediction, and candidate ambiguity evaluation. For the main experiments, the individual measures are combined into a machine learning approach to predict the difficulty of C-tests, X-tests, and cloze tests in English, German, and French. The performance of human experts on the same task is determined by conducting an annotation study to provide a basis for comparison. The quality of the automatic prediction reaches the levels of human accuracy for the largest datasets.

    If we can predict the difficulty of exercises, we are also able to manipulate it. We develop a new approach for exercise generation and selection that is based on the prediction model. It reaches high acceptance ratings by human users and can be directly integrated into real-world scenarios. In addition, the measures for word difficulty and candidate ambiguity are used to improve the tasks of content and distractor manipulation. Previous work on exercise difficulty was commonly limited to manual correlation analyses using learner results. The computational approach of this thesis makes it possible to predict the difficulty of text-completion exercises in advance. This is an important contribution towards the goal of completely automated exercise generation for language learning.
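
    The prediction setup described above can be sketched in a few lines: each gap item is represented by scores for the four model dimensions, and a regressor learns to predict the observed failure ratio. The feature values and the choice of regressor below are hypothetical.

    # Predict item difficulty (ratio of learners who fail) from the four
    # model dimensions; data and regressor choice are illustrative.
    from sklearn.ensemble import RandomForestRegressor

    # rows: [text_difficulty, word_difficulty, candidate_ambiguity,
    #        item_dependency], each a computational measure per gap item
    X_train = [
        [0.3, 0.2, 0.1, 0.0],
        [0.7, 0.6, 0.4, 0.2],
        [0.5, 0.8, 0.7, 0.5],
    ]
    y_train = [0.15, 0.55, 0.70]  # observed failure ratios per item

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print(model.predict([[0.6, 0.5, 0.3, 0.1]]))  # unseen item's difficulty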

    Generating Image Descriptions via Sequential Cross-Modal Alignment Guided by Human Gaze

    When speakers describe an image, they tend to look at objects before mentioning them. In this paper, we investigate such sequential cross-modal alignment by modelling the image description generation process computationally. We take as our starting point a state-of-the-art image captioning system and develop several model variants that exploit information from human gaze patterns recorded during language production. In particular, we propose the first approach to image description generation where visual processing is modelled sequentially. Our experiments and analyses confirm that better descriptions can be obtained by exploiting gaze-driven attention and shed light on human cognitive processes by comparing different ways of aligning the gaze modality with language production. We find that processing gaze data sequentially leads to descriptions that are better aligned to those produced by speakers, more diverse, and more natural, particularly when gaze is encoded with a dedicated recurrent component. Comment: In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020).
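
    A minimal sketch of the core idea: encode the gaze trace with a recurrent component and use its summary to weight visual regions. Shapes and module choices are illustrative assumptions, not the authors' exact architecture.

    # Gaze-driven attention over image regions, with an LSTM as the
    # dedicated recurrent component; all tensors are random stand-ins.
    import torch

    gaze_seq = torch.randn(1, 20, 2)   # 20 fixations as (x, y) coordinates
    visual = torch.randn(1, 49, 512)   # 7x7 grid of image region features

    gaze_rnn = torch.nn.LSTM(input_size=2, hidden_size=512, batch_first=True)
    _, (gaze_state, _) = gaze_rnn(gaze_seq)          # (1, 1, 512) summary

    # Score image regions against the gaze summary (gaze-driven attention).
    attn = torch.softmax(visual @ gaze_state.squeeze(0).T, dim=1)  # (1,49,1)
    context = (attn * visual).sum(dim=1)  # gaze-weighted visual context
    # `context` would condition the caption decoder at each generation step.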

    Patterns of Text Readability in Human and Predicted Eye Movements

    It has been shown that multilingual transformer models are able to predict human reading behavior when fine-tuned on small amounts of eye-tracking data. As the cumulated prediction results do not provide insights into the linguistic cues that the model acquires to predict reading behavior, we conduct a deeper analysis of the predictions from the perspective of readability. We try to disentangle the three-fold relationship between human eye movements, the capability of language models to predict these eye movement patterns, and sentence-level readability measures for English. We compare a range of model configurations to multiple baselines. We show that the models exhibit difficulties with function words and that pre-training only provides limited advantages for linguistic generalization.
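
    A small sketch of the kind of analysis involved: relating a sentence-level readability measure to model-predicted reading effort. The predicted fixation durations are hypothetical stand-ins for model output, and textstat is an assumed dependency for the readability side.

    # Correlate sentence readability with (predicted) reading effort;
    # sentences and fixation values are hypothetical illustrations.
    from scipy.stats import pearsonr
    import textstat

    sentences = [
        "The cat sat on the mat.",
        "Notwithstanding prior stipulations, the committee deferred "
        "ratification.",
        "She reads every morning.",
    ]
    readability = [textstat.flesch_reading_ease(s) for s in sentences]
    pred_fixation_ms = [180.0, 410.0, 170.0]  # hypothetical model output

    r, p = pearsonr(readability, pred_fixation_ms)
    print(f"Pearson r={r:.2f} (p={p:.3f})")  # easier sentences, less effort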